Numerical methods for sparse recovery

Authors

  • Massimo Fornasier
  • Ronny Ramlau
  • Gerd Teschke
Abstract

These lecture notes are an introduction to methods recently developed for performing numerical optimization with linear model constraints and additional sparsity conditions on the solutions, i.e., we expect solutions that can be represented as sparse vectors with respect to a prescribed basis. This type of problem has recently been greatly popularized by the development of the field of nonadaptive compressed acquisition of data, so-called compressed sensing, and its relationship with l1-minimization. We start our presentation by recalling the mathematical setting of compressed sensing as a reference framework for developing further generalizations. In particular, we focus on the analysis of algorithms for such problems and on their performance. We introduce and analyse the homotopy method, the iteratively reweighted least squares method, and the iterative hard thresholding algorithm. We will see that the convergence properties of these algorithms depend very much on special spectral properties (the Restricted Isometry Property or the Null Space Property) of the matrices which define the linear models. This provides a link to the courses of Holger Rauhut and Jared Tanner, who will address the analysis of such properties in detail from different points of view. The concept of sparsity does not necessarily apply only to the entries of a vector; it can also be applied, for instance, to their variation. We will show that some of the algorithms proposed for compressed sensing are in fact useful for optimization problems with total variation constraints. Usually these optimizations on continuous domains are related to the calculus of variations on functions of bounded variation (BV) and to geometric measure theory, which will be the subject of the course by Antonin Chambolle.
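The iterative hard thresholding algorithm mentioned above alternates a gradient step on the least squares residual with a projection onto the set of s-sparse vectors. A minimal sketch follows; it is not taken from the lecture notes, and the unit step size assumes a matrix normalized so that its spectral norm is at most 1, as in the compressed sensing setting:

```python
import numpy as np

def iterative_hard_thresholding(A, y, s, iters=200):
    """Sketch of iterative hard thresholding (IHT) for y = A x with an
    s-sparse x. The unit step size assumes ||A||_2 <= 1 (an assumption
    for this illustration, typical of normalized RIP matrices)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x)         # gradient step on ||y - Ax||^2 / 2
        idx = np.argsort(np.abs(g))[-s:]  # keep the s largest entries
        x = np.zeros_like(g)
        x[idx] = g[idx]
    return x
```

On a well-conditioned random matrix with a sufficiently sparse signal, the iteration recovers the signal exactly in the limit, which is the behavior the RIP analysis in the notes guarantees.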
In the second part of the lecture notes we address sparse optimization in Hilbert spaces, especially in situations where no Restricted Isometry Property or Null Space Property is assumed. We will be able to formulate efficient algorithms based on iterative soft-thresholding for such situations as well, although their analysis will require different tools, typically from nonsmooth convex analysis. The course by Ronny Ramlau, Gerd Teschke, and Mariya Zhariy addresses further developments of these algorithms towards regularization of nonlinear inverse problems as well as adaptive strategies. A common feature of the illustrated algorithms will be their variational nature, in the sense that they are derived as minimization strategies for given energy functionals. Not only does the variational framework allow us to derive very precise statements about the convergence properties of these algorithms, but it also provides the algorithms with an intrinsic robustness. We will finally address large-scale computations, showing how we can define domain decomposition strategies for these nonsmooth optimizations, both for problems coming from compressed sensing and l1-minimization and for total variation minimization problems. The first part of the lecture notes is elementary and requires no more than a basic knowledge of linear algebra and standard inequalities. The second part of the course is slightly more advanced, addressing problems in Hilbert spaces, and we will make use of more advanced concepts from nonsmooth convex analysis.
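Iterative soft-thresholding, the workhorse of the second part, minimizes the l1-penalized least squares functional by alternating a gradient step with componentwise soft shrinkage. The following sketch is an illustration under stated assumptions (step size 1/L with L the squared spectral norm of A), not the notes' own implementation:

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Sketch of iterative soft-thresholding (ISTA) for the functional
    ||Ax - y||^2 / 2 + lam * ||x||_1. The step size 1/L, with
    L = ||A||_2^2, is the Lipschitz constant of the smooth part's
    gradient (an assumption for this illustration)."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x
```

The soft-thresholding step is exactly the proximal map of the l1 norm, which is what makes the iteration a descent method for the nonsmooth energy, in line with the variational viewpoint of the notes.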

Similar resources

Anti-measurement Matrix Uncertainty Sparse Signal Recovery for Compressive Sensing

Compressive sensing (CS) is a technique for estimating a sparse signal from random measurements and the measurement matrix. Traditional sparse signal recovery methods degrade seriously under measurement matrix uncertainty (MMU). Here the MMU is modeled as a bounded additive error. An anti-uncertainty constraint in the form of a mixed l2 and l1 norm is deduced from the sparse ...

An improved strategy for solving Sudoku by sparse optimization methods

We proposed several strategies to improve sparse optimization methods for solving Sudoku puzzles, and we defined a new difficulty level for Sudoku. We tested our proposed methods on a Sudoku puzzle data set. Numerical results showed that the accurate recovery rate of the L1 sparse optimization method can be improved from 84%+ to 99%+.

Exact Recovery of Hard Thresholding Pursuit

Hard Thresholding Pursuit (HTP) is a class of truncated gradient descent methods for finding sparse solutions of l0-constrained loss minimization problems. HTP-style methods have been shown to have strong approximation guarantees and impressive numerical performance in high-dimensional statistical learning applications. However, the current theoretical treatment of these methods has trad...
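The truncated-gradient idea behind HTP can be sketched as follows: a gradient step proposes a candidate support of size s, and the estimate is then debiased by a least squares solve restricted to that support. This is a minimal illustration, with a unit step size that assumes a matrix of spectral norm at most 1:

```python
import numpy as np

def hard_thresholding_pursuit(A, y, s, iters=50):
    """Sketch of Hard Thresholding Pursuit (HTP): a gradient step selects
    a size-s candidate support, then the estimate is refit by least
    squares on that support. The unit step assumes ||A||_2 <= 1 (an
    assumption for this illustration)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x)       # gradient step on ||y - Ax||^2 / 2
        S = np.argsort(np.abs(g))[-s:]  # candidate support: s largest entries
        x = np.zeros_like(g)
        x[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]  # debias on support
    return x
```

Once the correct support is identified, the least squares refit reproduces the sparse signal exactly, which is why HTP typically needs far fewer iterations than plain hard thresholding.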

A New IRIS Segmentation Method Based on Sparse Representation

Iris recognition is one of the most reliable methods for identification. In general, it consists of image acquisition, iris segmentation, feature extraction, and matching. Among them, iris segmentation plays an important role in the performance of any iris recognition system. The eye's nonlinear movement, occlusion, and specular reflection are the main challenges for any iris segmentation method. In thi...

A Unified Approach to Model Selection and Sparse Recovery

Model selection and sparse recovery are two important problems for which many regularization methods have been proposed. We study the properties of regularization methods in both problems under the unified framework of regularized least squares with concave penalties. For model selection, we establish conditions under which a regularized least squares estimator enjoys a nonasymptotic property,...

Publication date: 2009